Fusion power is the power generated by nuclear fusion reactions. In this kind of reaction, two light atomic nuclei fuse together to form a heavier nucleus and in doing so, release a large amount of energy. In a more general sense, the term can also refer to the production of net usable power from a fusion source, similar to the usage of the term "steam power." Most design studies for fusion power plants involve using the fusion reactions to create heat, which is then used to operate a steam turbine, which drives generators to produce electricity. Except for the use of a thermonuclear heat source, this is similar to most coal, oil, and gas-fired power stations as well as fission-driven nuclear power stations.
As of July 2010, the largest experiment was the Joint European Torus (JET). In 1997, JET produced a peak of 16.1 megawatts (21,600 hp) of fusion power (65% of input power), with fusion power of over 10 MW (13,000 hp) sustained for over 0.5 s. In June 2005, the construction of the experimental reactor ITER, designed to produce several times more fusion power than the power put into the plasma over many minutes, was announced. Project partners were preparing the site in 2008. The production of net electrical power from fusion is planned for DEMO, the next-generation experiment after ITER. Additionally, the High Power laser Energy Research facility (HiPER) is undergoing preliminary design for possible construction in the European Union starting around 2010.
The reaction cross section σ is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution with thermonuclear fusion, then it is useful to perform an average over the distributions of the product of cross section and velocity. The reaction rate (fusions per volume per time) is <σv> times the product of the reactant number densities: f = n₁n₂<σv>. (If a single species reacts with itself, as in the D-D reaction, the product n₁n₂ is replaced by ½n².)
<σv> increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10–100 keV (2.2–22 fJ). At these temperatures, well above typical ionization energies (13.6 eV (2.18 aJ) in the hydrogen case), the fusion reactants exist in a plasma state.
The significance of <σv> as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion.
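To make these quantities concrete, the following Python sketch estimates the volumetric reaction rate and fusion power density from f = n₁n₂<σv>, and compares n·τE with the classic D-T Lawson threshold. The densities, confinement time, and the <σv> value (roughly the D-T reactivity near 10 keV) are illustrative assumptions, not design figures.

```python
# Rough D-T estimate: reaction rate f = n1 * n2 * <sigma v>, power density,
# and the Lawson product n * tau_E. All inputs are illustrative assumptions.

E_FUSION_J = 17.6e6 * 1.602e-19   # ~2.8e-12 J released per D-T reaction

n_d = n_t = 5e19                  # assumed deuteron and triton densities, m^-3
sigma_v = 1.1e-22                 # approx. D-T <sigma v> near 10 keV, m^3/s
tau_e = 3.0                       # assumed energy confinement time, s

rate = n_d * n_t * sigma_v        # fusions per m^3 per second
power_density = rate * E_FUSION_J

print(f"reaction rate: {rate:.2e} m^-3 s^-1")
print(f"power density: {power_density / 1e6:.2f} MW/m^3")
print(f"n*tau_E = {(n_d + n_t) * tau_e:.1e} s/m^3 (D-T threshold ~1.5e20)")
```

With these round numbers the sketch yields a power density of order 1 MW/m³ and an n·τE just above the threshold, illustrating why reactor-grade plasmas need densities near 10²⁰ m⁻³ held for seconds.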
The basic concept behind any fusion reaction is to bring two or more nuclei close enough together that the residual strong force (nuclear force) pulls them together into one larger nucleus. If two light nuclei fuse, they will generally form a single nucleus with a slightly smaller mass than the sum of their original masses. The difference in mass is released as energy according to Albert Einstein's mass-energy equivalence formula E = mc². If the input nuclei are sufficiently massive, the resulting fusion product will be heavier than the reactants, in which case the reaction requires an external source of energy. The dividing line between "light" and "heavy" is iron-56. Above this atomic mass, energy will generally be released by nuclear fission reactions; below it, by fusion.[1]
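As a worked example of the mass-energy formula, the 17.6 MeV released by the D-T reaction discussed below can be computed directly from the mass defect. The sketch uses standard atomic masses in unified mass units (1 u ≈ 931.494 MeV/c²).

```python
# Energy release of D + T -> 4He + n from the mass defect (E = mc^2).
# Nuclide masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
U_TO_MEV = 931.494

m_d, m_t = 2.014102, 3.016049    # deuterium, tritium
m_he4, m_n = 4.002602, 1.008665  # helium-4, neutron

delta_m = (m_d + m_t) - (m_he4 + m_n)   # mass defect, u
energy_mev = delta_m * U_TO_MEV

print(f"mass defect: {delta_m:.6f} u")
print(f"energy released: {energy_mev:.1f} MeV")   # ~17.6 MeV per reaction
```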
Fusion between the atoms is opposed by the electrostatic repulsion between their nuclei, which both carry a net positive charge. To overcome this electrostatic force, or "Coulomb barrier", some external source of energy must be supplied. The easiest way to do this is to heat the atoms, which has the side effect of stripping the electrons from the atoms and leaving them as bare nuclei. In most experiments the nuclei and electrons are left in a fluid known as a plasma. The temperature required to provide the nuclei with enough energy to overcome their repulsion is a function of the total charge, so hydrogen, which has the smallest nuclear charge, reacts at the lowest temperature. Helium has an extremely low mass per nucleon and is therefore energetically favoured as a fusion product. As a consequence, most fusion reactions combine isotopes of hydrogen (protium, deuterium, or tritium) to form isotopes of helium (3He or 4He).
Perhaps the three most widely considered fuel cycles are based on the D-T, D-D, and p-11B reactions. Other fuel cycles (D-3He and 3He-3He) would require a supply of 3He, either from other nuclear reactions or from extraterrestrial sources, such as the surface of the moon or the atmospheres of the gas giant planets.
The easiest (according to the Lawson criterion) and most immediately promising nuclear reaction to be used for fusion power is:

2H + 3H → 4He (3.5 MeV) + n (14.1 MeV)
Hydrogen-2 (deuterium) is a naturally occurring isotope of hydrogen and as such is universally available. The large mass ratio of the hydrogen isotopes makes their separation easy compared to the difficult uranium enrichment process. Hydrogen-3 (tritium) is also an isotope of hydrogen, but it occurs naturally in only negligible amounts due to its radioactive half-life of 12.32 years. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions:

n + 6Li → 3H + 4He (+4.8 MeV)
n + 7Li → 3H + 4He + n (−2.5 MeV)
The reactant neutron is supplied by the D-T fusion reaction shown above, the one that also produces the useful energy. The reaction with 6Li is exothermic, providing a small energy gain for the reactor. The reaction with 7Li is endothermic but does not consume the neutron. At least some 7Li reactions are required to replace the neutrons lost by reactions with other elements. Most reactor designs use the naturally occurring mix of lithium isotopes. The supply of lithium is more limited than that of deuterium, but still large enough to supply the world's energy demand for thousands of years.
Several drawbacks are commonly attributed to D-T fusion power.
The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of current fission power reactors, posing problems for material design. Design of suitable materials is under way, but their actual use in a reactor is not proposed until the generation after ITER. After a single series of D-T tests at JET, the largest fusion reactor yet to use this fuel, the vacuum vessel was sufficiently radioactive that remote handling was required for the year following the tests.
On the other hand, the volumetric deposition of neutron power can also be seen as an advantage. If all the power of a fusion reactor had to be transported by conduction through the surface enclosing the plasma, it would be very difficult to find materials and a construction that would survive, and it would probably entail a relatively poor efficiency.
Though more difficult to achieve than the deuterium-tritium reaction, fusion can also be produced through the reaction of deuterium with itself. This reaction has two branches that occur with nearly equal probability:
2H + 2H → 3H + 1H     (50%)
2H + 2H → 3He + n     (50%)
The optimum temperature for this reaction is 15 keV, only slightly higher than the optimum for the D-T reaction. The first branch does not produce neutrons, but it does produce tritium, so that a D-D reactor will not be completely tritium-free, even though it does not require an input of tritium or lithium. Most of the tritium produced will be burned before leaving the reactor, which reduces the tritium handling required, but also means that more neutrons are produced and that some of these are very energetic. The neutron from the second branch has an energy of only 2.45 MeV (0.393 pJ), whereas the neutron from the D-T reaction has an energy of 14.1 MeV (2.26 pJ); the higher-energy D-T neutrons cause a wider range of isotope production and more material damage. Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons is only about 18%, so the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from limitations of lithium resources and a somewhat softer neutron spectrum. The price to pay compared to D-T is that the energy confinement (at a given pressure) must be 30 times better and the power produced (at a given pressure and volume) is 68 times less.
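That ~18% figure can be checked with simple bookkeeping of the branch energetics. The following sketch assumes one reaction from each D-D branch, burn-up of the resulting triton via D-T, and no burning of the 3He; the reaction energies are standard values.

```python
# Fraction of fusion energy carried by neutrons: D-T vs. D-D with tritium burn-up.
# Assumes equal D-D branching and that the 3He product is not burned.

E_DT, E_DT_N = 17.6, 14.1        # D+T total energy / neutron energy, MeV
E_DD1 = 4.03                     # D+D -> T + p, MeV (no neutron)
E_DD2, E_DD2_N = 3.27, 2.45      # D+D -> 3He + n, MeV / neutron energy

# Per "cycle": one of each D-D branch, then the triton burns via D-T.
total = E_DD1 + E_DD2 + E_DT
neutrons = E_DD2_N + E_DT_N

f_dd = neutrons / total          # ~0.66
f_dt = E_DT_N / E_DT             # ~0.80
print(f"D-T neutron fraction: {f_dt:.0%}")
print(f"D-D neutron fraction: {f_dd:.0%}")
print(f"relative reduction:   {1 - f_dd / f_dt:.0%}")   # ~17-18%
```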
A second-generation approach to controlled fusion power involves combining helium-3 (3He) and deuterium (2H). This reaction produces a helium-4 nucleus (4He) and a high-energy proton. As with the p-11B aneutronic fusion fuel cycle, most of the reaction energy is released as charged particles, reducing activation of the reactor housing and potentially allowing more efficient energy harvesting (via any of several speculative technologies). In practice, D-D side reactions produce a significant number of neutrons, resulting in p-11B being the preferred cycle for aneutronic fusion.
If aneutronic fusion is the goal, then the most promising candidate may be the hydrogen-1 (proton)/boron reaction:

1H + 11B → 3 × 4He (+8.7 MeV)
Under reasonable assumptions, side reactions will result in about 0.1% of the fusion power being carried by neutrons.[4] The optimum temperature for this reaction, about 123 keV, is nearly ten times higher than that for the pure hydrogen reactions; the energy confinement must be 500 times better than that required for the D-T reaction, and the power density will be 2500 times lower than for D-T. Since the confinement properties of conventional approaches to fusion, such as the tokamak and laser pellet fusion, are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts, such as the Polywell and the dense plasma focus.
The idea of using human-initiated fusion reactions was first made practical for military purposes, in nuclear weapons. In a hydrogen bomb, the energy released by a fission weapon is used to compress and heat fusion fuel, beginning a fusion reaction that releases a large number of neutrons, which in turn increases the rate of fission. The first fission-fusion-fission-based weapons released some 500 times more energy than early fission weapons.
Civilian applications, where explosive energy production must be replaced by a controlled production, are still being developed. Although it took less than ten years to go from military applications to civilian fission energy production,[5] it has been very different in the fusion energy field; more than fifty years have already passed[6] without any commercial fusion energy production plant coming into operation.
The first patent related to a fusion reactor was registered in 1946[7] by the United Kingdom Atomic Energy Authority; the inventors were Sir George Paget Thomson and Moses Blackman. Some basic principles used in the ITER experiment are described in this patent: toroidal vacuum chamber, magnetic confinement, and radio frequency plasma heating.
The U.S. fusion program began in 1951 when Lyman Spitzer began work on a stellarator under the code name Project Matterhorn. His work led to the creation of the Princeton Plasma Physics Laboratory, where magnetically confined plasmas are still studied. The stellarator concept fell out of favor for several decades afterwards, plagued by poor confinement, but recent advances in computer technology have led to a significant resurgence of interest in these devices. A wide variety of other magnetic geometries were also tried, notably the magnetic mirror. These systems also suffered from similar problems when higher-performance versions were constructed.
A new approach was outlined in theoretical work carried out in 1950–1951 by I.E. Tamm and A.D. Sakharov in the Soviet Union, which first discussed a tokamak-like approach. Experimental research on these designs began in 1956 at the Kurchatov Institute in Moscow, conducted by a group of Soviet scientists led by Lev Artsimovich. The group constructed the first tokamaks, the most successful being the T-3 and its larger version, the T-4. The T-4 was tested in 1968 in Novosibirsk, producing the first quasistationary thermonuclear fusion reaction ever achieved.[8] The tokamak was dramatically more efficient than the other approaches of that era, and most research after the 1970s has concentrated on variations of this theme.
The same is true today, when very large tokamaks like ITER are expected to pass several milestones toward commercial power production, including a burning plasma with long burn times, high power output, and online fueling. There are no guarantees that the project will be successful; previous generations of tokamak machines have repeatedly uncovered new problems. But the entire field of high-temperature plasmas is much better understood now than formerly, and there is considerable optimism that ITER will meet its goals. If successful, ITER would be followed by a "commercial demonstrator" system, similar in purpose to the very earliest power-producing fission reactors built in the era before wide-scale commercial deployment of larger machines started in the 1960s and 1970s. Even with these goals met, a number of major engineering problems remain, notably finding suitable "low activity" materials for reactor construction, demonstrating secondary systems including practical tritium extraction, and building reactor designs that allow the reactor core to be removed when its materials become embrittled due to the neutron flux. Practical commercial generators based on the tokamak concept are far in the future. The public at large has been disappointed, as the initial outlook for practical fusion power plants was much rosier; a pamphlet from the 1970s printed by General Atomic stated that "Several commercial fusion reactors are expected to be online by the year 2000."
The Z-pinch phenomenon has been known since the end of the 18th century.[9] Its use in the fusion field comes from research on toroidal devices, initially at Los Alamos National Laboratory from 1952 (Perhapsatron) and in the United Kingdom from 1954 (ZETA), but its physical principles remained poorly understood and hard to control for a long time. Pinch devices were studied as potential development paths to practical fusion devices through the 1950s, but studies of the data generated by these devices suggested that instabilities in the collapse mechanism would doom any pinch-type device to power levels far too low to be practical. Most work on pinch-type devices ended by the 1960s. Recent work on the basic concept started as a result of the appearance of the "wire array" concept in the 1980s, which allowed a more efficient use of this technique. Sandia National Laboratories runs a continuing wire-array research program with the Z machine. In addition, the University of Washington's ZaP Lab has shown quiescent periods of stability hundreds of times longer than expected for plasma in a Z-pinch configuration, giving promise to the confinement technique.
The technique of imploding a microcapsule irradiated by laser beams, the basis of laser inertial confinement, was first suggested in 1962 by scientists at Lawrence Livermore National Laboratory, shortly after the invention of the laser itself in 1960. Lasers of the era were very low powered, but low-level research using them nevertheless started as early as 1965. More serious research started in the early 1970s when new types of lasers offered a path to dramatically higher power levels, levels that made inertial-confinement fusion devices appear practical for the first time. Important breakthroughs in laser technology were made at the Laboratory for Laser Energetics at the University of Rochester, where scientists used frequency-tripling crystals to transform infrared laser beams into ultraviolet beams. By the late 1970s great strides had been made in laser power, but with each increase new problems were found in the implosion technique that suggested even more power would be required. By the 1980s these increases were so large that using the concept for generating net energy seemed remote. Much of the research in this field turned to weapons applications, which had always been a second line of research, as the implosion concept is somewhat similar to hydrogen bomb operation. Work on very large versions continued as a result, with the very large National Ignition Facility in the US and Laser Mégajoule in France supporting these research programs.
More recent work has demonstrated that significant savings in the required laser energy are possible using a technique known as "fast ignition". The savings are so dramatic that the concept appears to be a useful technique for energy production again, so much so that it is a serious contender for pre-commercial development. There are proposals to build an experimental facility dedicated to the fast ignition approach, known as HiPER. At the same time, advances in solid-state lasers appear to improve the "driver" systems' efficiency by about ten times (to 10–20%), savings that make even the large "traditional" machines almost practical, and might make the fast ignition concept outpace the magnetic approaches in further development. The laser-based concept has other advantages as well. The reactor core is mostly exposed, as opposed to being wrapped in a huge magnet as in the tokamak. This makes the problem of removing energy from the system somewhat simpler, and should mean that a laser-based device would be much easier to perform maintenance on, such as core replacement. Additionally, the lack of strong magnetic fields allows for a wider variety of low-activation materials, including carbon fibre, which would reduce both the frequency of neutron activation and the rate of irradiation of the core. In other ways the program has many of the same problems as the tokamak; practical methods of energy removal and tritium recycling need to be demonstrated.
Philo T. Farnsworth, the inventor of the first all-electronic television system in 1927, patented his first Fusor design in 1968, a device that uses inertial electrostatic confinement. This system consists largely of two concentric spherical electrical grids inside a vacuum chamber into which a small amount of fusion fuel is introduced. Voltage across the grids causes the fuel to ionize around them, and positively charged ions are accelerated towards the center of the chamber. Those ions may collide and fuse with ions coming from the other direction, may scatter without fusing, or may pass directly through. In the latter two cases, the ions will tend to be stopped by the electric field and re-accelerated toward the center. Fusors can also use ion guns rather than electric grids. Towards the end of the 1960s, Robert Hirsch designed a variant of the Farnsworth Fusor known as the Hirsch-Meeks fusor. This variant is a considerable improvement over the Farnsworth design, and is able to generate neutron fluxes on the order of one billion neutrons per second. Although the efficiency was very low at first, there were hopes the device could be scaled up, but continued development demonstrated that this approach would be impractical for large machines. Nevertheless, fusion could be achieved using a "lab bench top" setup for the first time, at minimal cost. This type of fusor found its first application as a portable neutron generator in the late 1990s. An automated sealed reaction chamber version of this device, commercially named Fusionstar, was developed by EADS but abandoned in 2001. Its successor is the NSD-Fusion neutron generator.
Robert W. Bussard's Polywell concept is roughly similar to the Fusor, but replaces the problematic grid with a magnetically confined electron cloud, which holds the ions in position and provides an accelerating potential. The Polywell consists of electromagnet coils arranged in a polyhedral configuration and positively charged to between several tens and low hundreds of kilovolts. This charged magnetic polyhedron is called a MaGrid (Magnetic Grid). Electrons are introduced outside the "quasi-spherical" MaGrid and are accelerated into it by the electric field. Within the MaGrid, magnetic fields confine most of the electrons, and those that escape are retained by the electric field. This configuration traps the electrons in the middle of the device, focusing them near the center to produce a virtual cathode (negative electric potential). The virtual cathode accelerates and confines the ions to be fused, which, except for minimal losses, never reach the physical structure of the MaGrid. Bussard reported a fusion rate of 10⁹ per second running D-D fusion reactions at only 12.5 kV (based on detecting a total of nine neutrons in five tests). Bussard claimed that a scaled-up version, 2.5–3 m in diameter, would operate at over 100 MW net power (fusion power scales as the fourth power of the B field and the cube of the size).[10]
Focus fusion takes place in plasmoids produced by a dense plasma focus, a device that typically consists of two coaxial cylindrical electrodes made from copper or beryllium, housed in a vacuum chamber containing a low-pressure gas that serves as the reactor fuel. An electrical pulse is applied across the electrodes, producing heating and a magnetic field. The current forms the hot gas into many minuscule vortices perpendicular to the surfaces of the electrodes, which then migrate to the end of the inner electrode and pinch and twist off as tiny balls of plasma called plasmoids. An electron beam emitted from the plasmoid heats it to fusion temperatures; the scheme would in principle yield more energy in the emitted beams than was input to form them.
In April 2005, a team from UCLA announced it had devised a way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to smash deuterium atoms together. However, the process does not generate net power. See Pyroelectric fusion. Such a device would be useful in the same sort of roles as the fusor.
The likelihood of small industrial accidents, including the local release of radioactivity and injury to staff, cannot be estimated yet. Nevertheless, unlike modern fission reactors, a fusion reactor offers no possibility of a catastrophic accident resulting in a major release of radioactivity to the environment or injury to non-staff. The primary reason is that nuclear fusion requires precisely controlled temperature, pressure, and magnetic field parameters to generate net energy. If the reactor were damaged, these parameters would be disrupted and the heat generation in the reactor would rapidly cease. In contrast, the fission products in a fission reactor continue to generate heat through beta decay for several hours or even days after reactor shut-down, meaning that melting of the fuel rods is possible even after the reactor has been stopped, due to continued accumulation of decay heat.
There is also no risk of a runaway reaction in a fusion reactor, since the plasma is normally burnt at optimal conditions, and any significant change will render it unable to produce excess heat. In fusion reactors the reaction process is so delicate that this level of safety is inherent; no elaborate failsafe mechanism is required. Although the plasma in a fusion power plant will have a volume of 1000 cubic meters or more, the density of the plasma is extremely low, and the total amount of fusion fuel in the vessel is very small, typically a few grams. If the fuel supply is closed, the reaction stops within seconds. In comparison, a fission reactor is typically loaded with enough fuel for one or several years, and no additional fuel is necessary to keep the reaction going.
In the magnetic approach, strong fields are developed in coils that are held in place mechanically by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to any other industrial accident or an MRI machine quench/explosion, and could be effectively stopped with a containment building similar to those used in existing (fission) nuclear generators. The laser-driven inertial approach is generally lower-stress. Although failure of the reaction chamber is possible, simply stopping fuel delivery would prevent any sort of catastrophic failure.
Most reactor designs rely on the use of liquid lithium as both a coolant and a method for converting stray neutrons from the reaction into tritium, which is fed back into the reactor as fuel. Lithium is highly flammable, and in the case of a fire it is possible that the lithium stored on-site could be burned up and escape. In this case the tritium contents of the lithium would be released into the atmosphere, posing a radiation risk. However, calculations suggest that the total amount of tritium and other radioactive gases in a typical power plant would be so small, about 1 kg, that they would have diluted to legally acceptable limits by the time they blew as far as the plant's perimeter fence.[11]
The natural product of the fusion reaction is a small amount of helium, which is completely harmless to life. Of more concern is tritium, which, like other isotopes of hydrogen, is difficult to retain completely. During normal operation, some amount of tritium will be continually released. There would be no acute danger, but the cumulative effect on the world's population from a fusion economy could be a matter of concern. Although tritium is volatile and biologically active, the health risk posed by a release is much lower than that of most radioactive contaminants, due to tritium's short half-life (12 years), very low decay energy (~14.95 keV), and the fact that it does not bioaccumulate (instead being cycled out of the body as water, with a biological half-life of 7 to 14 days) (see Gianni Petrangeli, Nuclear Safety, p. 226). Current ITER designs are investigating total containment facilities for any tritium.
The large flux of high-energy neutrons in a reactor will make the structural materials radioactive. The radioactive inventory at shut-down may be comparable to that of a fission reactor, but there are important differences.
The half-lives of the radioisotopes produced by fusion tend to be shorter than those from fission, so that the inventory decreases more rapidly. Unlike fission reactors, whose waste remains radioactive for thousands of years, most of the radioactive material in a fusion reactor would be the reactor core itself, which would be dangerous for about 50 years, with low-level waste remaining for another 100. Although this waste will be considerably more radioactive during those 50 years than fission waste, the very short half-life makes the process very attractive, as the waste management is fairly straightforward. By 300 years the material would have the same radioactivity as coal ash.[11]
Additionally, the choice of materials used in a fusion reactor is less constrained than in a fission design, where many materials are required for their specific neutron cross-sections. This allows a fusion reactor to be designed using materials that are selected specifically to be "low activation", materials that do not easily become radioactive. Vanadium, for example, would become much less radioactive than stainless steel. Carbon fibre materials are also low-activation, as well as being strong and light, and are a promising area of study for laser-inertial reactors where a magnetic field is not required.
In general terms, fusion reactors would create far less radioactive material than a fission reactor, the material created would be less damaging biologically, and the radioactivity would "burn off" within a time period that is well within existing engineering capabilities.
Although fusion power uses nuclear technology, the overlap with nuclear weapons technology is small. Tritium is a component of the trigger of hydrogen bombs, but not a major problem in production. The copious neutrons from a fusion reactor could be used to breed plutonium for an atomic bomb, but not without extensive redesign of the reactor, so that production would be difficult to conceal. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with the more scientifically developed magnetic confinement fusion.
Large-scale reactors using neutronic fuels (e.g. ITER) and thermal power production (turbine based) are most comparable to fission power from an engineering and economics viewpoint. Both fission and fusion power plants involve a relatively compact heat source powering a conventional steam turbine-based power plant, while producing enough neutron radiation to make activation of the plant materials problematic. The main distinction is that fusion power produces no high-level radioactive waste (though activated plant materials still need to be disposed of). There are some power plant ideas which may significantly lower the cost or size of such plants; however, research in these areas is nowhere near as advanced as in tokamaks.
Fusion power proposals commonly involve the use of deuterium, an isotope of hydrogen, as fuel, and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global power output of about 100 EJ/yr (≈1×10²⁰ J/yr) and that this does not increase in the future, the known current lithium reserves would last 3000 years, lithium from sea water would last 60 million years, and a more complicated fusion process using only deuterium from sea water would have fuel for 150 billion years.[12] To put this in context, 150 billion years is over ten times the currently measured age of the universe and close to 30 times the remaining lifespan of the sun.[13]
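The order of magnitude of the seawater-deuterium figure can be reproduced with rough arithmetic, as in the Python sketch below. The ocean mass (~1.4×10²¹ kg) and the ~7.2 MeV released per deuteron in a full D-D burn are round-number assumptions for illustration only.

```python
# Rough check of the "150 billion years" deuterium supply figure.
# Round-number assumptions throughout; order of magnitude only.
AVOGADRO = 6.022e23

ocean_g = 1.4e21 * 1e3                 # assumed ocean mass, grams
h_atoms = ocean_g * (2 / 18) / 1.008 * AVOGADRO   # H is 2/18 of water by mass
d_atoms = h_atoms / 6500               # 1 in 6500 hydrogen atoms is deuterium

e_per_d = 7.2e6 * 1.602e-19            # ~7.2 MeV per deuteron in a full D-D burn
world_j_per_yr = 1e20                  # 1995 global output, as quoted above

years = d_atoms * e_per_d / world_j_per_yr
print(f"deuterium supply: {years:.1e} years")   # ~1.6e11, i.e. ~160 billion years
```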
Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion.
The first human-made, large-scale fusion reaction was the test of the hydrogen bomb, Ivy Mike, in 1952. As part of the PACER project, it was once proposed to use hydrogen bombs as a source of power by detonating them in underground caverns and then generating electricity from the heat produced, but such a power plant is unlikely ever to be constructed, for a variety of reasons. Controlled thermonuclear fusion (CTF) refers to the alternative of continuous power production, or at least the use of explosions that are so small that they do not destroy a significant portion of the machine that produces them.
To produce self-sustaining fusion, the energy released by the reaction (or at least a fraction of it) must be used to heat new reactant nuclei and keep them hot long enough that they also undergo fusion reactions. Retaining the heat is called energy confinement and may be accomplished in a number of ways.
The hydrogen bomb really has no confinement at all. The fuel is simply allowed to fly apart, but it takes a certain length of time to do this, and during this time fusion can occur. This approach is called inertial confinement. If more than milligram quantities of fuel were used (and efficiently fused), the explosion would destroy the machine, so, theoretically, controlled thermonuclear fusion using inertial confinement would be done with tiny pellets of fuel that explode several times a second. To induce the explosion, the pellet must be compressed to about 30 times solid density with energetic beams. If the beams are focused directly on the pellet, it is called direct drive, which can in principle be very efficient, but in practice it is difficult to obtain the needed uniformity. An alternative approach is indirect drive, in which the beams heat a shell, and the shell radiates x-rays, which then implode the pellet. The beams are commonly laser beams, but heavy and light ion beams and electron beams have all been investigated.
Inertial confinement produces plasmas with impressively high densities and temperatures, and appears to be best suited to weapons research, X-ray generation, very small reactors, and perhaps in the distant future, spaceflight. These devices require fuel pellets of nearly perfect shape in order to generate a symmetrical inward shock wave that produces the high-density plasma, and in practice such pellets have proven difficult to produce. A recent development in the field of laser-induced ICF is the use of ultrashort-pulse multi-petawatt lasers to heat the plasma of an imploding pellet at exactly the moment of greatest density, after it is imploded conventionally using terawatt-scale lasers. This research will be carried out on the OMEGA EP petawatt laser (under construction) and the OMEGA laser at the University of Rochester, and on the GEKKO XII laser at the Institute for Laser Engineering in Osaka, Japan; if fruitful, it may greatly reduce the cost of a laser-fusion-based power source.
At the temperatures required for fusion, the fuel is in the form of a plasma with very good electrical conductivity. This opens the possibility of confining the fuel and the energy with magnetic fields, an idea known as magnetic confinement. The Lorentz force acts only perpendicular to the magnetic field, so the first problem is how to prevent the plasma from leaking out the ends of the field lines. There are basically two solutions.
The first is to use the magnetic mirror effect. If particles following a field line encounter a region of higher field strength, then some of the particles will be stopped and reflected. Advantages of a magnetic mirror power plant would be simplified construction and maintenance due to a linear topology and the potential to apply direct conversion in a natural way, but the confinement achieved in the experiments was so poor that this approach has been essentially abandoned.
The second possibility to prevent end losses is to bend the field lines back on themselves, either in circles or more commonly in nested toroidal surfaces. The most highly developed system of this type is the tokamak, with the stellarator being next most advanced, followed by the Reversed field pinch. Compact toroids, especially the Field-Reversed Configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area. Compact toroids still have some enthusiastic supporters but are not backed as readily by the majority of the fusion community.
Finally, there are also electrostatic confinement fusion systems, in which ions in the reaction chamber are confined and held at the center of the device by electrostatic forces, as in the Farnsworth-Hirsch Fusor, which is not believed to be able to be developed into a power plant. The Polywell, an advanced variant of the fusor, has attracted a degree of research interest of late; however, the technology is relatively immature, and major scientific and engineering questions remain, which researchers working under the auspices of the U.S. Office of Naval Research hope to investigate further.
A more subtle technique is to use unusual particles to catalyse fusion. The best known of these is muon-catalyzed fusion, which uses muons, particles that behave somewhat like electrons and can replace the electrons around atoms. Muons allow atoms to get much closer together and thus reduce the kinetic energy required to initiate fusion. However, muons require more energy to produce than can be obtained from muon-catalysed fusion, making this approach impractical for the generation of power.
Some scientists have reported excess heat, neutrons, tritium, helium and other nuclear effects in so-called cold fusion systems. In 2004, a peer review panel was commissioned by the U.S. Department of Energy to study these claims.[14] This identified basic areas of research which were necessary for acceptance of the idea, but did not recommend a federally funded program.
Research into sonoluminescence-induced fusion, sometimes known as "bubble fusion", also continues, although it is met by most of the scientific community with as much skepticism as cold fusion.
In fusion research, achieving a fusion energy gain factor Q = 1 is called breakeven and is considered a significant although somewhat artificial milestone. Ignition refers to an infinite Q, that is, a self-sustaining plasma where the losses are made up for by fusion power without any external input. In a practical fusion reactor, some external power will always be required for things like current drive, refueling, profile control, and burn control. A value on the order of Q = 20 will be required if the plant is to deliver much more energy than it uses internally.
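The need for Q values well above breakeven follows from a simple power balance: the plasma heating power (P_fusion/Q) must be recirculated from the plant's own electrical output. The sketch below assumes an illustrative thermal-conversion efficiency of ~40% and a heating-system wall-plug efficiency of ~70%; both numbers are assumptions, not design values.

```python
# Why Q well above breakeven is needed: a simple plant power balance.
# Illustrative efficiencies only: eta_th = thermal-to-electric conversion,
# eta_heat = wall-plug efficiency of the plasma heating systems.

def net_output_fraction(Q, eta_th=0.4, eta_heat=0.7):
    """Net electric output as a fraction of gross, for fusion gain Q.

    P_fusion = Q * P_heat; both fusion and heating power end up as heat,
    and the heating power must be recirculated from the plant's output.
    """
    return 1 - 1 / (eta_heat * eta_th * (Q + 1))

for q in (1, 5, 20):
    print(f"Q = {q:2d}: net/gross = {net_output_fraction(q):+.0%}")
```

Under these assumptions, Q = 1 leaves the plant a net consumer of electricity, Q = 5 leaves only ~40% of gross output for the grid, and Q = 20 leaves ~83%, consistent with the order-of-20 requirement stated above.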
There have been many design studies for fusion power plants. Despite many differences, there are several systems that are common to most. To begin with, a fusion power plant, like a fission power plant, is customarily divided into the nuclear island and the balance of plant. The balance of plant is the conventional part that converts high-temperature heat into electricity via steam turbines. It is much the same in a fusion power plant as in a fission or coal power plant. In a fusion power plant, the nuclear island has a plasma chamber with an associated vacuum system, surrounded by plasma-facing components (first wall and divertor) maintaining the vacuum boundary and absorbing the thermal radiation coming from the plasma, surrounded in turn by a blanket where the neutrons are absorbed to breed tritium and heat a working fluid that transfers the power to the balance of plant. If magnetic confinement is used, a magnet system, using primarily cryogenic superconducting magnets, is needed, and usually systems for heating and refueling the plasma and for driving current. In inertial confinement, a driver (laser or accelerator) and a focusing system are needed, as well as a means for forming and positioning the pellets.
Although the standard solution for electricity production in fusion power plant designs is conventional steam turbines using the heat deposited by neutrons, there are also designs for direct conversion of the energy of the charged particles into electricity. These are of little value with a D-T fuel cycle, where 80% of the power is in the neutrons, but are indispensable with aneutronic fusion, where less than 1% is. Direct conversion has been most commonly proposed for open-ended magnetic configurations like magnetic mirrors or Field-Reversed Configurations, where charged particles are lost along the magnetic field lines, which are then expanded to convert a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. Typically the claimed conversion efficiency is in the range of 80%, but the converter may approach the reactor itself in size and expense.
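A rough comparison shows why direct conversion matters little for D-T but greatly for aneutronic fuels. The sketch below combines the power splits quoted above with assumed conversion efficiencies (~40% for a steam cycle, ~80% for direct conversion); the efficiencies are illustrative.

```python
# Overall conversion efficiency with direct conversion, using the text's
# power splits: D-T ~80% in neutrons, aneutronic fuels ~1%.
# Assumed efficiencies: thermal cycle ~40%, direct conversion ~80%.

def overall_eff(neutron_frac, eta_thermal=0.4, eta_direct=0.8):
    # Neutron power must go through the steam cycle; charged-particle
    # power can be converted directly.
    return neutron_frac * eta_thermal + (1 - neutron_frac) * eta_direct

print(f"D-T with direct conversion:        {overall_eff(0.80):.0%}")  # ~48%
print(f"aneutronic with direct conversion: {overall_eff(0.01):.0%}")  # ~80%
```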
Developing materials for fusion reactors has long been recognized as a problem nearly as difficult and important as that of plasma confinement, but it has received only a fraction of the attention. The neutron flux in a fusion reactor is expected to be about 100 times that in existing pressurized water reactors (PWRs). Each atom in the blanket of a fusion reactor is expected to be hit by a neutron and displaced about a hundred times before the material is replaced. Furthermore, the high-energy neutrons will produce hydrogen and helium in various nuclear reactions; these gases tend to form bubbles at grain boundaries and result in swelling, blistering, or embrittlement. One also wishes to choose materials whose primary components and impurities do not result in long-lived radioactive wastes. Finally, the mechanical forces and temperatures are large, and there may be frequent cycling of both.
The problem is exacerbated because realistic material tests must expose samples to neutron fluxes of a similar level for a similar length of time as those expected in a fusion power plant. Such a neutron source is nearly as complicated and expensive as a fusion reactor itself would be. Proper materials testing will not be possible in ITER, and a proposed materials testing facility, IFMIF, was still at the design stage in 2005.
The material of the plasma-facing components (PFCs) is a special problem. The PFCs do not have to withstand large mechanical loads, so neutron damage is much less of an issue. They do have to withstand extremely large thermal loads, up to 10 MW/m², which is a difficult but solvable problem. Regardless of the material chosen, the heat flux can only be accommodated without melting if the distance from the front surface to the coolant is not more than a centimeter or two. The primary issue is the interaction with the plasma. One can choose either a low-Z material, typified by graphite (although for some purposes beryllium might be chosen), or a high-Z material, usually tungsten, with molybdenum as a second choice. Use of liquid metals (lithium, gallium, tin) has also been proposed, e.g., by injection of 1–5 mm thick streams flowing at 10 m/s on solid substrates.
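The centimeter-scale limit follows from one-dimensional heat conduction, ΔT = q·d/k: the thicker the armour, the larger the temperature drop across it. The sketch below assumes the 10 MW/m² flux quoted above and a rough high-temperature thermal conductivity for tungsten of ~150 W/(m·K); both are indicative values only.

```python
# Temperature drop across a plasma-facing armour layer: dT = q * d / k.
# Indicative values: q = 10 MW/m^2 heat flux, k ~ 150 W/(m K) for tungsten
# at elevated temperature (an assumption; k falls as temperature rises).

q = 10e6                 # surface heat flux, W/m^2
k = 150.0                # assumed thermal conductivity, W/(m K)

for d_cm in (0.5, 1.0, 2.0):
    dT = q * (d_cm / 100) / k
    print(f"{d_cm:3.1f} cm armour: dT = {dT:5.0f} K")
```

A 2 cm layer already implies a temperature drop above 1300 K, which is why the distance from the plasma-facing surface to the coolant must stay at the centimeter scale.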
If graphite is used, the gross erosion rates due to physical and chemical sputtering would be many meters per year, so one must rely on redeposition of the sputtered material. The location of the redeposition will not exactly coincide with the location of the sputtering, so one is still left with erosion rates that may be prohibitive. An even larger problem is the tritium co-deposited with the redeposited graphite. The tritium inventory in graphite layers and dust in a reactor could quickly build up to many kilograms, representing a waste of resources and a serious radiological hazard in case of an accident. The consensus of the fusion community seems to be that graphite, although a very attractive material for fusion experiments, cannot be the primary PFC material in a commercial reactor.
The sputtering rate of tungsten can be orders of magnitude smaller than that of carbon, and tritium is not so easily incorporated into redeposited tungsten, making this a more attractive choice. On the other hand, tungsten impurities in a plasma are much more damaging than carbon impurities, and self-sputtering of tungsten can be high, so it will be necessary to ensure that the plasma in contact with the tungsten is not too hot (a few tens of eV rather than hundreds of eV). Tungsten also has disadvantages in terms of eddy currents and melting in off-normal events, as well as some radiological issues.
While fusion power is still in early stages of development, substantial sums have been and continue to be invested in research. In the EU almost €10 billion was spent on fusion research up to the end of the 1990s, and the new ITER reactor alone is budgeted at €10 billion. It is estimated that up to the point of possible implementation of electricity generation by nuclear fusion, R&D will need further funding totalling around €60–80 billion over a period of 50 years or so (of which €20–30 billion within the EU).[15] Nuclear fusion research receives €750 million (excluding ITER funding), compared with €810 million for all non-nuclear energy research combined,[16] putting research into fusion power well ahead of that of any single rival technology.
Fusion power would provide much more energy for a given weight of fuel than any technology currently in use,[17] and the fuel itself (primarily deuterium) exists abundantly in the Earth's ocean: about 1 in 6500 hydrogen atoms in seawater is deuterium.[18] Although this may seem a low proportion (about 0.015%), because nuclear fusion reactions are so much more energetic than chemical combustion and seawater is easier to access and more plentiful than fossil fuels, some experts estimate that fusion could supply the world's energy needs for millions of years.[19][20]
An important aspect of fusion energy, in contrast to many other energy sources, is that the cost of production does not suffer from diseconomies of scale. The cost of wind energy, for example, goes up as the optimal locations are developed first, while further generators must be sited in less ideal conditions. With fusion energy, the production cost would not increase much even if large numbers of plants were built. It has been suggested that even 100 times the current energy consumption of the world could be supplied in this way.
Some problems that are expected to be an issue in this century, such as fresh water shortages, can actually be regarded as problems of energy supply. For example, in desalination plants, seawater can be purified through distillation or reverse osmosis, but these processes are energy intensive. Even if the first fusion plants are not competitive with alternative sources, fusion could still become competitive if large-scale desalination requires more power than the alternatives are able to provide. Furthermore, since refining the suggested fusion fuels (deuterium and tritium) via distillation or electrolysis of seawater would produce pure hydrogen as a by-product, fusion plants could themselves supply a small amount of drinking water while reclaiming otherwise lost energy. Under ideal conditions, about 1 g of deuterium would be obtained per 30 kg of water processed.
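That last figure follows directly from the 1-in-6500 deuterium abundance, as this short check shows (molar masses in g/mol; standard values).

```python
# Check: ~1 g of deuterium per 30 kg of water, given that 1 in 6500
# hydrogen atoms is deuterium.
M_H, M_D, M_H2O = 1.008, 2.014, 18.015   # molar masses, g/mol

water_g = 30e3
h_mass = water_g * (2 * M_H / M_H2O)     # hydrogen mass in the water
d_frac = (M_D / 6500) / M_H              # deuterium mass fraction of hydrogen
print(f"deuterium per 30 kg water: {h_mass * d_frac:.2f} g")   # ~1 g
```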
Despite being technically non-renewable, fusion power has many of the benefits of long-term renewable energy sources (such as being a sustainable energy supply compared to presently utilized sources and emitting no greenhouse gases) as well as some of the benefits of much more limited energy sources such as hydrocarbons and nuclear fission (without reprocessing). Like these currently dominant energy sources, fusion could provide very high power-generation density and uninterrupted power delivery, since it is not dependent on the weather, unlike wind and solar power.
Despite optimism dating back to the 1950s about the wide-scale harnessing of fusion power, significant barriers still stand between current scientific understanding and technological capabilities and the practical realization of fusion as an energy source. Research, while making steady progress, has also continually thrown up new difficulties. Therefore, it remains unclear whether an economically viable fusion plant is possible.[21][22] A 2006 editorial in New Scientist magazine opined that "if commercial fusion is viable, it may well be a century away."[22]
Several D-T burning tokamak test devices have been built (TFTR, JET), but these were not designed to produce more thermal energy than the electrical energy they consumed. Despite research having started in the 1950s, no commercial fusion reactor is expected before 2050. The ITER project is currently leading the effort to commercialize fusion power.
A paper published in January 2009, part of the IAEA Fusion Energy Conference proceedings in Geneva in October 2008, claims that small 50 MW tokamak-style reactors are feasible.[23]
On May 30, 2009, the US Lawrence Livermore National Laboratory (LLNL), primarily a weapons lab, announced the creation of a high-energy laser system, the National Ignition Facility, which can heat hydrogen to temperatures found in nature only in the cores of stars. The new laser is expected to have the ability to produce, for the first time, more energy from controlled, inertially confined nuclear fusion than was required to initiate the reaction.[24]
On January 28, 2010, the LLNL announced tests using all 192 laser beams, although with lower laser energies, smaller hohlraum targets, and substitutes for the fusion fuel capsules.[25][26] More than one megajoule of ultraviolet energy was fired into the hohlraum, besting the previous world record by a factor of more than 30. The results gave the scientists confidence that they will be able to achieve ignition in more realistic tests scheduled to begin in the summer of 2010.[27]